Extensions to Regularised Discriminant Analysis
Authors
Abstract
Regularised Discriminant Analysis has proven to be a most effective classifier for problems where traditional classifiers fail because of a lack of sufficient training samples, as is often the case in high-dimensional settings. However, it has been shown that the model selection procedure of Regularised Discriminant Analysis, which determines the degree of regularisation, has some deficiencies associated with it. We propose a modified model selection procedure based on a new appreciation function. By means of an extensive simulation it was shown that the new model selection procedure performs better than the original one. We also propose that one of the control parameters of Regularised Discriminant Analysis be allowed to take on negative values. This extension leads to improved performance in certain situations. The results are confirmed using two chemical data sets.
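The abstract does not reproduce the estimator itself, so the sketch below assumes the standard Friedman-style formulation of Regularised Discriminant Analysis: each class covariance is blended with the pooled covariance by one parameter (here called lam) and then shrunk towards a scaled identity by a second parameter (here called gamma). The function names, the plain grid search, and the use of training accuracy as the selection criterion are illustrative assumptions, not the modified selection procedure or appreciation function proposed in the paper; likewise, letting gamma go below zero only mirrors the proposed extension under the assumption that the negative-valued parameter is the shrinkage-towards-identity one.

```python
import numpy as np


def rda_covariances(X, y, lam, gamma):
    """Regularised class covariance estimates (Friedman-style RDA sketch).

    lam blends each class scatter matrix with the pooled scatter;
    gamma shrinks the result towards a scaled identity matrix.
    Degrees-of-freedom corrections are ignored for brevity.
    """
    classes = np.unique(y)
    n, p = X.shape
    scatters, counts, means = {}, {}, {}
    for c in classes:
        Xc = X[y == c]
        counts[c] = Xc.shape[0]
        means[c] = Xc.mean(axis=0)
        D = Xc - means[c]
        scatters[c] = D.T @ D
    pooled = sum(scatters.values())

    covs = {}
    for c in classes:
        # Blend class scatter with pooled scatter, then normalise.
        W = (1.0 - lam) * scatters[c] + lam * pooled
        S = W / ((1.0 - lam) * counts[c] + lam * n)
        # Shrink towards (or, for gamma < 0, away from) a scaled identity.
        covs[c] = (1.0 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
    return covs, means, counts


def rda_predict(X, covs, means, counts):
    """Assign each row of X to the class with the largest quadratic score."""
    n = sum(counts.values())
    labels = list(covs.keys())
    scores = []
    for c in labels:
        S_inv = np.linalg.inv(covs[c])
        _, logdet = np.linalg.slogdet(covs[c])
        d = X - means[c]
        maha = np.einsum("ij,jk,ik->i", d, S_inv, d)
        scores.append(-0.5 * maha - 0.5 * logdet + np.log(counts[c] / n))
    return np.asarray(labels)[np.argmax(np.column_stack(scores), axis=1)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Small synthetic two-class problem with more features than samples per
    # class, the kind of setting the abstract has in mind.
    X = np.vstack([rng.normal(0.0, 1.0, (15, 20)),
                   rng.normal(0.5, 1.0, (15, 20))])
    y = np.array([0] * 15 + [1] * 15)

    # Illustrative model selection over (lam, gamma); training accuracy here
    # stands in for the paper's selection criterion. The grid deliberately
    # includes a negative gamma value.
    for lam in (0.25, 0.5, 0.75):
        for gamma in (-0.25, 0.0, 0.25, 0.5):
            covs, means, counts = rda_covariances(X, y, lam, gamma)
            # Negative gamma can break positive-definiteness, so screen
            # such grid points before scoring.
            if any(np.linalg.eigvalsh(S).min() <= 0 for S in covs.values()):
                print(f"lam={lam:.2f} gamma={gamma:+.2f} skipped (not PD)")
                continue
            acc = np.mean(rda_predict(X, covs, means, counts) == y)
            print(f"lam={lam:.2f} gamma={gamma:+.2f} accuracy={acc:.2f}")
```

With gamma above zero the eigenvalues of each class covariance are pulled towards their mean, which stabilises the matrix inverse when samples are scarce; a negative value pushes them apart instead, which gives some intuition for why such an extension could help in certain situations, though the paper's own analysis should be consulted for the exact mechanism.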